Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has
enabled significant advances in natural language processing, including in
machine translation, question answering, and sentiment analysis. However, for
agents to intelligently interact with humans, simply capturing the statistical
patterns is insufficient. In this paper we investigate if, and how, grounded
compositional language can emerge as a means to achieve goals in multi-agent
populations. Towards this end, we propose a multi-agent learning environment
and learning methods that bring about the emergence of a basic compositional
language. This language is represented as streams of abstract discrete symbols
uttered by agents over time, but nonetheless has a coherent structure that
possesses a defined vocabulary and syntax. We also observe the emergence of
non-verbal communication, such as pointing and guiding, when language
communication is unavailable.
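The abstract describes language as streams of abstract discrete symbols mapped to grounded goals. A minimal toy sketch of that idea (hypothetical setup and names, not the paper's actual environment or learning method) is a speaker that encodes a goal as a symbol stream and a listener that decodes it back:

```python
# Toy sketch of grounded discrete-symbol communication: a speaker maps
# goal components to symbols from a shared codebook; a listener inverts
# the mapping. The codebook stands in for what the agents would learn.
def speaker_utterance(goal, codebook):
    """Emit one discrete symbol per goal component (a degenerate syntax)."""
    return [codebook[component] for component in goal]

def listener_decode(symbols, codebook):
    """Recover the goal from the symbol stream."""
    inverse = {symbol: component for component, symbol in codebook.items()}
    return [inverse[s] for s in symbols]

# Hypothetical learned vocabulary: goal component -> abstract symbol id.
codebook = {"agent_2": 1, "goto": 3, "red_landmark": 7}
goal = ["agent_2", "goto", "red_landmark"]

utterance = speaker_utterance(goal, codebook)   # [1, 3, 7]
decoded = listener_decode(utterance, codebook)
assert decoded == goal
```

In the paper the mapping from goals to symbols is learned end-to-end from task reward rather than fixed as here; the sketch only illustrates what "a defined vocabulary and syntax" over discrete symbols means operationally.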
Learning Contact-Rich Manipulation Skills with Guided Policy Search
Autonomous learning of object manipulation skills can enable robots to
acquire rich behavioral repertoires that scale to the variety of objects found
in the real world. However, current motion skill learning methods typically
restrict the behavior to a compact, low-dimensional representation, limiting
its expressiveness and generality. In this paper, we extend a recently
developed policy search method \cite{la-lnnpg-14} and use it to learn a range
of dynamic manipulation behaviors with highly general policy representations,
without using known models or example demonstrations. Our approach learns a set
of trajectories for the desired motion skill by using iteratively refitted
time-varying linear models, and then unifies these trajectories into a single
control policy that can generalize to new situations. To enable this method to
run on a real robot, we introduce several improvements that reduce the sample
count and automate parameter selection. We show that our method can acquire
fast, fluent behaviors after only minutes of interaction time, and can learn
robust controllers for complex tasks, including putting together a toy
airplane, stacking tight-fitting Lego blocks, placing wooden rings onto
tight-fitting pegs, inserting a shoe tree into a shoe, and screwing bottle
caps onto bottles.
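One ingredient the abstract names is iteratively refitting time-varying linear dynamics models from sampled trajectories. A minimal sketch of that refitting step (our illustrative reading, with made-up dimensions and synthetic rollouts, not the authors' code) is a per-timestep least-squares fit of x_{t+1} ≈ A_t x_t + B_t u_t:

```python
import numpy as np

rng = np.random.default_rng(1)
T, dim_x, dim_u, n_samples = 5, 3, 2, 20  # hypothetical sizes

# Synthetic ground-truth time-varying linear dynamics to generate rollouts.
A_true = [np.eye(dim_x) + 0.01 * rng.standard_normal((dim_x, dim_x)) for _ in range(T)]
B_true = [0.1 * rng.standard_normal((dim_x, dim_u)) for _ in range(T)]

X = rng.standard_normal((n_samples, T + 1, dim_x))  # states (x_0 random)
U = rng.standard_normal((n_samples, T, dim_u))      # controls
for t in range(T):
    X[:, t + 1] = X[:, t] @ A_true[t].T + U[:, t] @ B_true[t].T

def fit_time_varying_linear(X, U):
    """Least-squares fit of (A_t, B_t) at each time step from rollouts."""
    dim_x = X.shape[2]
    fitted = []
    for t in range(U.shape[1]):
        Z = np.hstack([X[:, t], U[:, t]])             # regressors [x_t, u_t]
        W, *_ = np.linalg.lstsq(Z, X[:, t + 1], rcond=None)
        fitted.append((W[:dim_x].T, W[dim_x:].T))     # recover A_t, B_t
    return fitted

models = fit_time_varying_linear(X, U)
# With noiseless rollouts the fit recovers the true dynamics closely.
assert all(np.allclose(A, At, atol=1e-6) for (A, _), At in zip(models, A_true))
```

In the method itself, these local linear models are refit at every iteration around the current trajectories and used to improve time-varying local controllers, which are then distilled into a single general policy; the sketch covers only the model-fitting piece.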
- …